Post-selection inference for l1-penalized likelihood models

Author

  • Wei JIANG

Abstract

We present a new method for post-selection inference for l1 (lasso)-penalized likelihood models, including generalized regression models. Our approach generalizes the post-selection framework presented in Lee et al. (2013) [1]. The method provides p-values and confidence intervals that are asymptotically valid, conditional on the inherent selection done by the lasso. We present applications of this work to (regularized) logistic regression, Cox's proportional hazards model, and the graphical lasso. We do not provide rigorous proofs of the claimed results here, but rather conceptual and theoretical sketches.
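The core idea of conditioning on the selection event can be illustrated in one dimension, without any of the paper's machinery. The sketch below is a toy example (all names are illustrative, not from the paper): a single Gaussian observation is "selected" when it exceeds a lasso-style threshold, and the honest p-value is then a truncated-Gaussian tail probability rather than the usual one.

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def naive_pvalue(y):
    # two-sided test of mu = 0 that ignores how y was selected
    return 2.0 * (1.0 - norm_cdf(abs(y)))

def selective_pvalue(y, lam):
    # p-value conditional on the selection event {Y > lam}:
    # under H0, Y | {Y > lam} is N(0, 1) truncated to (lam, inf),
    # so the p-value is a ratio of Gaussian tail probabilities
    assert y > lam > 0
    return (1.0 - norm_cdf(y)) / (1.0 - norm_cdf(lam))

y_obs, lam = 2.0, 1.5                        # "selected" since y_obs > lam
p_naive = naive_pvalue(y_obs)                # ~0.046: looks significant
p_selective = selective_pvalue(y_obs, lam)   # ~0.341: not significant
```

The naive test is anti-conservative precisely because the same data that chose the hypothesis is reused to test it; conditioning on the selection event restores validity, which is the behaviour the abstract claims in general.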


Similar articles

Penalized Bregman Divergence Estimation via Coordinate Descent

Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. More recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression, and was shown to be computationally superior. This paper explores...
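The appeal of coordinate descent mentioned above is that each coordinate update has a closed form: soft-thresholding of a partial residual. A minimal pure-Python sketch for the linear-regression lasso (a naive illustration, not the implementation of Friedman et al.):

```python
def soft_threshold(rho, lam):
    # closed-form minimizer of 0.5*z*b^2 - rho*b + lam*|b|, up to division by z
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_sweeps=100):
    # minimizes 0.5*||y - X beta||^2 + lam*||beta||_1
    # by cycling through coordinates; X is a list of rows
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_sweeps):
        for j in range(p):
            rho, z_j = 0.0, 0.0
            for i in range(n):
                # residual with feature j's own contribution removed
                r_ij = y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                rho += X[i][j] * r_ij
                z_j += X[i][j] ** 2
            beta[j] = soft_threshold(rho, lam) / z_j
    return beta

# orthogonal toy design: the lasso solution is exact soft-thresholding,
# so one sweep already lands on the answer
X = [[1, 0], [1, 0], [0, 1], [0, 1]]
y = [2.0, 2.0, 0.1, -0.1]
beta = lasso_cd(X, y, lam=1.0)   # beta ~ [1.5, 0.0]
```

The large signal is shrunk (2.0 to 1.5) and the small one is thresholded exactly to zero, which is the variable-selection behaviour the snippet describes.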


Variable Selection via Penalized Likelihood

Variable selection is vital to statistical data analyses. Many procedures in use are ad hoc stepwise selection procedures, which are computationally expensive and ignore the stochastic errors introduced in earlier steps of the selection process. An automatic and simultaneous variable selection procedure can be obtained by using a penalized likelihood method. In traditional linear models, the best...


Rejoinder: One-step Sparse Estimates in Nonconcave Penalized Likelihood Models

Most traditional variable selection criteria, such as the AIC and the BIC, are (or are asymptotically equivalent to) penalized likelihoods with the L0 penalty, namely pλ(|β|) = (1/2)λ^2 I(|β| ≠ 0), with appropriate values of λ (Fan and Li [7]). In general, optimizing the L0-penalized likelihood function via exhaustive search over all subset models is an NP-hard computational problem....
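The equivalence the snippet alludes to can be spelled out: for a model with k nonzero coefficients and log-likelihood log L, both criteria are L0-penalized likelihoods,

```latex
\mathrm{AIC} = -2\log L + 2k, \qquad \mathrm{BIC} = -2\log L + k\log n,
```

i.e. both have the form $-2\log L + \sum_j p_\lambda(|\beta_j|)$ with $p_\lambda(|\beta|) = c\,\mathbf{1}\{|\beta| \neq 0\}$, where $c = 2$ for AIC and $c = \log n$ for BIC. Exhaustive minimization over all $2^p$ subsets is what makes the L0 problem NP-hard, motivating the continuous (e.g. l1) relaxations discussed throughout this page.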


Comparison of Ordinal Response Modeling Methods like Decision Trees, Ordinal Forest and L1 Penalized Continuation Ratio Regression in High Dimensional Data

Background: Response variables in most medical and health-related research are ordinal in nature. Conventional modeling methods assume predictor variables to be independent and require a large number of samples (n) relative to the number of covariates (p). It is therefore not possible to use conventional models for high-dimensional genetic data in which p > n. The present study compared th...


Dimension Reduction and Variable Selection in Case Control Studies via Regularized Likelihood Optimization

Dimension reduction and variable selection are performed routinely in case-control studies, but the literature on the theoretical aspects of the resulting estimates is scarce. We bring our contribution to this literature by studying estimators obtained via l1 penalized likelihood optimization. We show that the optimizers of the l1 penalized retrospective likelihood coincide with the optimizers ...




Journal:

Volume   Issue 

Pages  -

Publication year: 2017